Results 1 - 20 of 5,213
1.
Ann Plast Surg ; 92(4): 367-372, 2024 Apr 01.
Article in English | MEDLINE | ID: mdl-38527337

ABSTRACT

STATEMENT OF THE PROBLEM: Standardized medical photography of the face is a vital part of patient documentation, clinical evaluation, and scholarly dissemination. Because digital photography is a mainstay in clinical care, there is a critical need for an easy-to-use mobile device application that could assist users in taking a standardized clinical photograph. ImageAssist was developed to answer this need. The mobile application is integrated into the electronic medical record (EMR); it implements and automates American Society of Plastic Surgery/Plastic Surgery Research Foundation photographic guidelines with background deletion. INITIAL PRODUCT DEVELOPMENT: A team consisting of a craniofacial plastic surgeon and the Health Information Technology product group developed and implemented the pilot application of ImageAssist. The application launches directly from the patient's chart in the mobile version of the EMR, Epic Haiku (Verona, Wisconsin). Standard views of the face (90-degree, oblique left and right, front, and basal views) were built into digital templates and are user-selected. Red digital frames overlay the patient's face on the screen and turn green once standardized alignment is achieved, prompting the user to capture the image. The background is then digitally subtracted to a standard blue, and the photograph is not stored on the user's phone. EARLY USER EXPERIENCE: ImageAssist's initial beta user group was limited to 13 providers across dermatology, ENT, and plastic surgery. A mix of physicians, advanced practice providers, and nurses piloted the application in the outpatient clinic setting using ImageAssist on their smartphones. An internal survey then gathered feedback on the user experience. In the first 2 years of use, 31 users have taken more than 3,400 photographs in more than 800 clinical encounters. Since the initial release, automated background deletion has also been functional for any anatomic area.
CONCLUSIONS: ImageAssist is a novel smartphone application that standardizes clinical photography and is integrated into the EMR, which could save both time and expense for clinicians seeking to take consistent clinical images. Future steps include continued refinement of the current image capture functionality and development of a stand-alone mobile device application.


Subjects
Mobile Applications; Plastic Surgery Procedures; Surgery, Plastic; Humans; United States; Smartphone; Photography/methods
2.
Biomed Eng Online ; 23(1): 32, 2024 Mar 12.
Article in English | MEDLINE | ID: mdl-38475784

ABSTRACT

PURPOSE: This study aimed to investigate the imaging repeatability of self-service fundus photography compared to traditional fundus photography performed by experienced operators. DESIGN: Prospective cross-sectional study. METHODS: At a community-based eye disease screening site, we recruited 65 eyes (65 participants) from the resident population of Shanghai, China. All participants were free of cataract or any other condition that could compromise the quality of fundus imaging. Participants were assigned to either the fully self-service or the traditional fundus photography group. Image quantitative analysis software was used to extract clinically relevant indicators from the fundus images. Finally, a statistical analysis was performed to characterize the imaging repeatability of fully self-service fundus photography. RESULTS: There was no statistically significant difference between the two groups in the absolute differences or the extents of variation of the indicators. The extents of variation of all the measurement indicators, with the exception of the optic cup area, were below 10% in both groups. The Bland-Altman plots and multivariate analysis results were consistent with the results mentioned above. CONCLUSIONS: The image repeatability of fully self-service fundus photography is comparable to that of traditional fundus photography performed by professionals, demonstrating promise for large-scale eye disease screening programs.
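The Bland-Altman analysis mentioned above can be sketched in a few lines: the bias is the mean of the paired differences, and the 95% limits of agreement are the bias ± 1.96 standard deviations. The measurement values below are invented for illustration and are not the study's data.

```python
import numpy as np

def bland_altman_limits(m1, m2):
    """Return the bias (mean difference) and 95% limits of agreement
    for two repeated measurements of the same quantity."""
    m1, m2 = np.asarray(m1, dtype=float), np.asarray(m2, dtype=float)
    diff = m1 - m2
    bias = diff.mean()
    sd = diff.std(ddof=1)  # sample standard deviation of the differences
    return bias, bias - 1.96 * sd, bias + 1.96 * sd

# Hypothetical repeat measurements of one fundus indicator (e.g., disc area, mm^2)
first  = [2.10, 1.95, 2.30, 2.05, 2.20]
second = [2.12, 1.90, 2.28, 2.10, 2.18]
bias, lo, hi = bland_altman_limits(first, second)
```

If most paired differences fall inside (lo, hi) and the bias is near zero, the two acquisition modes are considered to agree, which is the criterion the study applies to self-service versus operator-acquired images.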


Subjects
Community Health Services; Glaucoma; Humans; Cross-Sectional Studies; Prospective Studies; China; Photography/methods; Fundus Oculi
3.
Int Ophthalmol ; 44(1): 41, 2024 Feb 09.
Article in English | MEDLINE | ID: mdl-38334896

ABSTRACT

Diabetic retinopathy (DR) is the leading global cause of vision loss, accounting for 4.8% of global blindness cases as estimated by the World Health Organization (WHO). Fundus photography is crucial in ophthalmology as a diagnostic tool for capturing retinal images. However, resource and infrastructure constraints limit access to traditional tabletop fundus cameras in developing countries. Additionally, these conventional cameras are expensive, bulky, and not easily transportable. In contrast, the newer generation of handheld and smartphone-based fundus cameras offers portability, user-friendliness, and affordability. Despite their potential, there is a lack of comprehensive review studies examining the clinical utility of these handheld (e.g., Zeiss Visuscout 100, Volk Pictor Plus, Volk Pictor Prestige, Remidio NMFOP, FC161) and smartphone-based (e.g., D-EYE, iExaminer, Peek Retina, Volk iNview, Volk Vistaview, oDocs visoScope, oDocs Nun, oDocs Nun IR) fundus cameras. This review fills that gap by evaluating the feasibility, practicality, efficiency, cost-effectiveness, and remote capabilities of the available handheld and smartphone-based cameras across various clinical settings and use scenarios, emphasizing their advantages over traditional tabletop fundus cameras and ultimately enhancing the accessibility of ophthalmic services.


Subjects
Diabetes Mellitus; Diabetic Retinopathy; Eye Diseases; Humans; Diabetic Retinopathy/diagnosis; Smartphone; Fundus Oculi; Retina; Eye Diseases/diagnosis; Photography/methods; Blindness
4.
Diabetes Care ; 47(2): 304-319, 2024 02 01.
Article in English | MEDLINE | ID: mdl-38241500

ABSTRACT

BACKGROUND: Diabetic macular edema (DME) is the leading cause of vision loss in people with diabetes. Application of artificial intelligence (AI) in interpreting fundus photography (FP) and optical coherence tomography (OCT) images allows prompt detection and intervention. PURPOSE: To evaluate the performance of AI in detecting DME from FP or OCT images and identify potential factors affecting model performances. DATA SOURCES: We searched seven electronic libraries up to 12 February 2023. STUDY SELECTION: We included studies using AI to detect DME from FP or OCT images. DATA EXTRACTION: We extracted study characteristics and performance parameters. DATA SYNTHESIS: Fifty-three studies were included in the meta-analysis. FP-based algorithms of 25 studies yielded pooled area under the receiver operating characteristic curve (AUROC), sensitivity, and specificity of 0.964, 92.6%, and 91.1%, respectively. OCT-based algorithms of 28 studies yielded pooled AUROC, sensitivity, and specificity of 0.985, 95.9%, and 97.9%, respectively. Potential factors improving model performance included deep learning techniques and larger, more diverse training data sets. Models demonstrated better performance when validated internally than externally, and those trained with multiple data sets showed better results upon external validation. LIMITATIONS: Analyses were limited by unstandardized algorithm outcomes and insufficient data in patient demographics, OCT volumetric scans, and external validation. CONCLUSIONS: This meta-analysis demonstrates satisfactory performance of AI in detecting DME from FP or OCT images. External validation is warranted for future studies to evaluate model generalizability. Further investigations may estimate optimal sample size, effect of class balance, patient demographics, and additional benefits of OCT volumetric scans.


Subjects
Diabetes Mellitus; Diabetic Retinopathy; Macular Edema; Humans; Diabetic Retinopathy/diagnostic imaging; Diabetic Retinopathy/complications; Macular Edema/diagnostic imaging; Macular Edema/etiology; Artificial Intelligence; Tomography, Optical Coherence/methods; Photography/methods
5.
Klin Monbl Augenheilkd ; 241(1): 75-83, 2024 Jan.
Article in English | MEDLINE | ID: mdl-38242135

ABSTRACT

Cataract is among the leading causes of visual impairment worldwide. Innovations in treatment have drastically improved patient outcomes, but to be properly implemented, it is necessary to have the right diagnostic tools. This review explores the cataract grading systems developed by researchers in recent decades and provides insight into both merits and limitations. To this day, the gold standard for cataract classification is the Lens Opacity Classification System III. Different cataract features are graded according to standard photographs during slit lamp examination. Although widely used in research, its clinical application is rare, and it is limited by its subjective nature. Meanwhile, recent advancements in imaging technology, notably Scheimpflug imaging and optical coherence tomography, have opened the possibility of objective assessment of lens structure. With the use of automatic lens anatomy detection software, researchers demonstrated a good correlation to functional and surgical metrics such as visual acuity, phacoemulsification energy, and surgical time. The development of deep learning networks has further increased the capability of these grading systems by improving interpretability and increasing robustness when applied to norm-deviating cases. These classification systems, which can be used for both screening and preoperative diagnostics, are of value for targeted prospective studies, but still require implementation and validation in everyday clinical practice.


Subjects
Cataract; Lens, Crystalline; Phacoemulsification; Humans; Prospective Studies; Photography/methods; Cataract/diagnosis; Visual Acuity; Phacoemulsification/methods
6.
Indian J Ophthalmol ; 72(Suppl 2): S280-S296, 2024 Feb 01.
Article in English | MEDLINE | ID: mdl-38271424

ABSTRACT

PURPOSE: To compare the quantification of intraretinal hard exudate (HE) using en face optical coherence tomography (OCT) and fundus photography. METHODS: Consecutive en face images and corresponding fundus photographs from 13 eyes of 10 patients with macular edema associated with diabetic retinopathy or Coats' disease were analyzed using the machine-learning-based image analysis tool, "ilastik." RESULTS: The overall measured HE area was greater with en face images than with fundus photos (en face: 0.49 ± 0.35 mm2 vs. fundus photo: 0.34 ± 0.34 mm2, P < 0.001). However, there was an excellent correlation between the two measurements (intraclass correlation coefficient [ICC] = 0.844). There was a negative correlation between HE area and central macular thickness (CMT) (r = -0.292, P = 0.001). However, HE area showed a positive correlation with CMT in the previous several months, especially in eyes treated with anti-vascular endothelial growth factor (VEGF) therapy (CMT 3 months before: r = 0.349, P = 0.001; CMT 4 months before: r = 0.287, P = 0.012). CONCLUSION: Intraretinal HE can be reliably quantified from either en face OCT images or fundus photography with the aid of an interactive machine learning-based image analysis tool. HE area changes lagged several months behind CMT changes, especially in eyes treated with anti-VEGF injections.


Subjects
Diabetic Retinopathy; Tomography, Optical Coherence; Humans; Tomography, Optical Coherence/methods; Retrospective Studies; Diagnostic Techniques, Ophthalmological; Diabetic Retinopathy/diagnosis; Diabetic Retinopathy/complications; Photography/methods; Exudates and Transudates/metabolism
7.
J Biomed Opt ; 29(Suppl 1): S11524, 2024 Jan.
Article in English | MEDLINE | ID: mdl-38292055

ABSTRACT

Significance: Compressed ultrafast photography (CUP) is currently the world's fastest single-shot imaging technique. Through the integration of compressed sensing and streak imaging, CUP can capture a transient event in a single camera exposure with imaging speeds from thousands to trillions of frames per second, at micrometer-level spatial resolutions, and in broad sensing spectral ranges. Aim: This tutorial aims to provide a comprehensive review of CUP in its fundamental methods, system implementations, biomedical applications, and prospect. Approach: A step-by-step guideline to CUP's forward model and representative image reconstruction algorithms is presented with sample codes and illustrations in Matlab and Python. Then, CUP's hardware implementation is described with a focus on the representative techniques, advantages, and limitations of the three key components-the spatial encoder, the temporal shearing unit, and the two-dimensional sensor. Furthermore, four representative biomedical applications enabled by CUP are discussed, followed by the prospect of CUP's technical advancement. Conclusions: CUP has emerged as a state-of-the-art ultrafast imaging technology. Its advanced imaging ability and versatility contribute to unprecedented observations and new applications in biomedicine. CUP holds great promise in improving technical specifications and facilitating the investigation of biomedical processes.
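The forward model described above (spatial encoding, temporal shearing, spatiotemporal integration) can be illustrated with a toy numpy simulation. The array sizes, the static binary mask, and the one-pixel-per-frame shear are arbitrary illustrative choices, not the parameters of any actual CUP system.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy dynamic scene: T frames of an H x W intensity movie
T, H, W = 8, 16, 16
scene = rng.random((T, H, W))

# Spatial encoder: one static pseudo-random binary mask
# (in hardware, a pattern displayed on a digital micromirror device)
mask = rng.integers(0, 2, size=(H, W)).astype(float)

# Temporal shearing + integration: frame t is shifted t pixels along
# one axis by the streak unit, then all sheared, encoded frames
# accumulate on the 2-D sensor in a single exposure
snapshot = np.zeros((H + T - 1, W))
for t in range(T):
    snapshot[t:t + H, :] += mask * scene[t]
```

Reconstruction then inverts this forward operator under a sparsity prior (compressed sensing), recovering the T frames from the single `snapshot`; the tutorial's sample codes cover that inverse step.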


Subjects
Image Processing, Computer-Assisted; Photography; Photography/methods; Image Processing, Computer-Assisted/methods; Algorithms
8.
BMC Med Inform Decis Mak ; 24(1): 25, 2024 Jan 26.
Article in English | MEDLINE | ID: mdl-38273286

ABSTRACT

BACKGROUND: The epiretinal membrane (ERM) is a common retinal disorder characterized by abnormal fibrocellular tissue at the vitreomacular interface. Most patients with ERM are asymptomatic at early stages. Therefore, screening for ERM will become increasingly important. Despite the high prevalence of ERM, few deep learning studies have investigated ERM detection in the color fundus photography (CFP) domain. In this study, we built a generative model to enhance ERM detection performance in CFP. METHODS: This deep learning study retrospectively collected 302 ERM and 1,250 healthy CFP data points from a healthcare center. The StyleGAN2-based generative model was trained on single-center data. EfficientNetB0 with StyleGAN2-based augmentation was validated using independent internal single-center data and external datasets. We randomly assigned healthcare center data to the development (80%) and internal validation (20%) datasets. Data from two publicly accessible sources were used as external validation datasets. RESULTS: StyleGAN2 facilitated realistic CFP synthesis with the characteristic cellophane reflex features of the ERM. The proposed method with StyleGAN2-based augmentation outperformed typical transfer learning without a generative adversarial network. The proposed model achieved an area under the receiver operating characteristic curve (AUC) of 0.926 for internal validation. AUCs of 0.951 and 0.914 were obtained for the two external validation datasets. Compared with the deep learning model without augmentation, StyleGAN2-based augmentation improved the detection performance and contributed to the focus on the location of the ERM. CONCLUSIONS: We proposed an ERM detection model by synthesizing realistic CFP images with the pathological features of ERM through generative deep learning. We believe that our deep learning framework will help achieve a more accurate detection of ERM in a limited data setting.


Subjects
Deep Learning; Epiretinal Membrane; Humans; Epiretinal Membrane/diagnostic imaging; Retrospective Studies; Diagnostic Techniques, Ophthalmological; Photography/methods
9.
Community Ment Health J ; 60(3): 457-469, 2024 Apr.
Article in English | MEDLINE | ID: mdl-37874437

ABSTRACT

The importance of community involvement for both older adults and individuals coping with mental illness is well documented. Yet, barriers to community integration for adults with mental illness such as social stigma, discrimination, and economic marginalization are often exacerbated by increased health and mobility challenges among older adults. Using photovoice, nine older adults with mental illness represented their views of community in photographs and group discussions over a six-week period. Participant themes of community life included physical spaces, valued social roles, and access to resources in the community. Themes were anchored by older adults' perceptions of historical and cultural time comparisons between 'how things used to be' and 'how things are now.' Barriers to community integration were often related to factors such as age, mobility, and resources rather than to mental health status. Program evaluation results suggest photovoice can promote self-reflection, learning, and collaboration among older adults with mental illness.


Subjects
Mental Disorders; Photography; Humans; Aged; Photography/methods; Social Stigma; Mental Disorders/psychology; Learning
11.
BMJ Open Ophthalmol ; 8(1)2023 12 06.
Article in English | MEDLINE | ID: mdl-38057106

ABSTRACT

OBJECTIVE: To develop and validate an explainable artificial intelligence (AI) model for detecting geographic atrophy (GA) via colour retinal photographs. METHODS AND ANALYSIS: We conducted a prospective study where colour fundus images were collected from healthy individuals and patients with retinal diseases using an automated imaging system. All images were categorised into three classes: healthy, GA and other retinal diseases, by two experienced retinologists. Simultaneously, an explainable learning model using class activation mapping techniques categorised each image into one of the three classes. The AI system's performance was then compared with manual evaluations. RESULTS: A total of 540 colour retinal photographs were collected. The data were divided such that 300 images from each class trained the AI model, 120 were used for validation and 120 for performance testing. In distinguishing between GA and healthy eyes, the model demonstrated a sensitivity of 100%, specificity of 97.5% and an overall diagnostic accuracy of 98.4%. Performance metrics like the area under the receiver operating characteristic (AUC-ROC, 0.988) and precision-recall (AUC-PR, 0.952) curves reinforced the model's robust performance. When differentiating GA from other retinal conditions, the model preserved a diagnostic accuracy of 96.8%, a precision of 90.9% and a recall of 100%, leading to an F1-score of 0.952. The AUC-ROC and AUC-PR scores were 0.975 and 0.909, respectively. CONCLUSIONS: Our explainable AI model exhibits excellent performance in detecting GA using colour retinal images. With its high sensitivity, specificity and overall diagnostic accuracy, the AI model stands as a powerful tool for the automated diagnosis of GA.
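The reported F1-score follows directly from the stated precision and recall, since F1 is their harmonic mean. As a quick arithmetic check (a sketch, not the study's code):

```python
def f1_score(precision, recall):
    """F1 is the harmonic mean of precision and recall."""
    return 2 * precision * recall / (precision + recall)

# Precision 90.9% and recall 100% as reported for GA vs. other conditions
f1 = f1_score(0.909, 1.0)  # ~0.952, matching the reported F1-score
```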


Subjects
Artificial Intelligence; Geographic Atrophy; Humans; Geographic Atrophy/diagnosis; Prospective Studies; Color; Photography/methods
12.
Comput Biol Med ; 167: 107616, 2023 12.
Article in English | MEDLINE | ID: mdl-37922601

ABSTRACT

Age-related macular degeneration (AMD) is a leading cause of vision loss in the elderly, highlighting the need for early and accurate detection. In this study, we proposed DeepDrAMD, a hierarchical vision transformer-based deep learning model that integrates data augmentation techniques and SwinTransformer, to detect AMD and distinguish between different subtypes using color fundus photographs (CFPs). The DeepDrAMD was trained on the in-house WMUEH training set and achieved high performance in AMD detection with an AUC of 98.76% in the WMUEH testing set and 96.47% in the independent external Ichallenge-AMD cohort. Furthermore, the DeepDrAMD effectively classified dryAMD and wetAMD, achieving AUCs of 93.46% and 91.55%, respectively, in the WMUEH cohort and another independent external ODIR cohort. Notably, DeepDrAMD excelled at distinguishing between wetAMD subtypes, achieving an AUC of 99.36% in the WMUEH cohort. Comparative analysis revealed that the DeepDrAMD outperformed conventional deep-learning models and expert-level diagnosis. The cost-benefit analysis demonstrated that the DeepDrAMD offers substantial cost savings and efficiency improvements compared to manual reading approaches. Overall, the DeepDrAMD represents a significant advancement in AMD detection and differential diagnosis using CFPs, and has the potential to assist healthcare professionals in informed decision-making, early intervention, and treatment optimization.


Subjects
Deep Learning; Macular Degeneration; Humans; Aged; Diagnosis, Differential; Macular Degeneration/diagnostic imaging; Diagnostic Techniques, Ophthalmological; Photography/methods
13.
Environ Monit Assess ; 195(11): 1381, 2023 Oct 27.
Article in English | MEDLINE | ID: mdl-37889358

ABSTRACT

Camera trap data are biased when an animal passes through a camera's field of view but is not recorded. Cameras that operate using passive infrared sensors rely on their ability to detect thermal energy from the surface of an object. Optimal camera deployment consequently depends on the relationship between a sensor array and an animal. Here, we describe a general, experimental approach to evaluate detection errors that arise from the interaction between cameras and animals. We adapted distance sampling models and estimated the combined effects of distance, camera model, lens height, and vertical angle on the probability of detecting three different body sizes representing mammals that inhabit temperate, boreal, and arctic ecosystems. Detection probabilities were best explained by a half-normal-logistic mixture and were influenced by all experimental covariates. Detection monotonically declined when proxies were ≥6 m from the camera; however, models show that body size and camera model mediated the effect of distance on detection. Although not a focus of our study, we found that unmodeled heterogeneity arising from solar position has the potential to bias inferences where animal movements vary over time. Understanding heterogeneous detection probabilities is valuable when designing and analyzing camera trap studies. We provide a general experimental and analytical framework that ecologists, citizen scientists, and others can use and adapt to optimize camera protocols for various wildlife species and communities. Applying our framework can help ecologists assess trade-offs that arise from interactions among distance, cameras, and body sizes before committing resources to field data collection.
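The half-normal component of the detection mixture described above has the standard distance-sampling form: detection is certain at distance zero and declines monotonically with distance. A minimal sketch follows; the scale parameter value is illustrative, not a fitted estimate from the study.

```python
import math

def half_normal_detection(distance, sigma):
    """Half-normal detection function g(d) = exp(-d^2 / (2 * sigma^2)).
    g(0) = 1 (certain detection at the camera) and g declines
    monotonically as distance grows; sigma sets how quickly."""
    return math.exp(-distance**2 / (2 * sigma**2))

# Detection probability drops with distance for a given scale sigma
probs = [half_normal_detection(d, sigma=4.0) for d in (0, 2, 6, 10)]
```

In the study's mixture model, covariates such as camera model, lens height, vertical angle, and body size would enter through the parameters, which is how the mediating effects on detection were estimated.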


Subjects
Ecosystem; Photography; Animals; Photography/methods; Environmental Monitoring; Animals, Wild; Mammals
14.
Int Ophthalmol ; 43(12): 4851-4859, 2023 Dec.
Article in English | MEDLINE | ID: mdl-37847478

ABSTRACT

PURPOSE: Early detection and treatment of diabetic retinopathy (DR) are critical for decreasing the risk of vision loss and preventing blindness. Community vision screenings may play an important role, especially in communities at higher risk for diabetes. To address the need for increased DR detection and referrals, we evaluated the use of artificial intelligence (AI) for screening DR. METHODS: Patient images of 124 eyes were obtained using a 45° Canon Non-Mydriatic CR-2 Plus AF retinal camera in the Department of Endocrinology Clinic (Newark, NJ) and in a community screening event (Newark, NJ). Images were initially classified by an onsite grader and uploaded for analysis by EyeArt, a cloud-based AI software developed by Eyenuk (California, USA). The images were also graded by an off-site retina specialist. Using Fleiss kappa analysis, agreement on the diagnosis of DR and on referral pattern was assessed among the three graders: the AI, the onsite grader, and a US board-certified retina specialist. RESULTS: The EyeArt results, onsite grader, and the retina specialist had a 79% overall agreement on the diagnosis of DR: 86 eyes with full agreement, 37 eyes with agreement between two graders, 1 eye with full disagreement. The kappa value for concordance on a diagnosis was 0.69 (95% CI 0.61-0.77), indicating substantial agreement. Referral patterns by EyeArt, the onsite grader, and the ophthalmologist had an 85% overall agreement: 96 eyes with full agreement, 28 eyes with disagreement. The kappa value for concordance on "whether to refer" was 0.70 (95% CI 0.60-0.80), indicating substantial agreement. Using the board-certified retina specialist as the gold standard, EyeArt had an 81% accuracy (101/124 eyes) for diagnosis and 83% accuracy (103/124 eyes) in referrals. For referrals, the sensitivity of EyeArt was 74%, specificity was 87%, positive predictive value was 72%, and negative predictive value was 88%.
CONCLUSIONS: This retrospective cross-sectional analysis offers insights into the use of AI in diabetic retinopathy screening and the significant role it will play in automated DR detection. The EyeArt readings were beneficial, with some limitations, in a community screening environment. These limitations included decreased accuracy in the presence of cataracts and the functional cost of EyeArt uploads in a community setting.
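The referral metrics quoted above all derive from a single 2x2 confusion matrix against the gold-standard grader. The counts below are a hypothetical reconstruction chosen to be consistent with the rounded figures reported (124 eyes, 103 correct referral calls); they illustrate the calculation and are not the study's actual table.

```python
def diagnostic_metrics(tp, fp, fn, tn):
    """Standard screening metrics from 2x2 confusion-matrix counts,
    with the gold-standard grader defining true positives/negatives."""
    return {
        "sensitivity": tp / (tp + fn),
        "specificity": tn / (tn + fp),
        "ppv": tp / (tp + fp),          # positive predictive value
        "npv": tn / (tn + fn),          # negative predictive value
        "accuracy": (tp + tn) / (tp + fp + fn + tn),
    }

# Hypothetical counts summing to 124 eyes (illustrative only)
m = diagnostic_metrics(tp=28, fp=11, fn=10, tn=75)
```

With these counts the metrics round to the reported values: sensitivity 74%, specificity 87%, PPV 72%, NPV 88%, and accuracy 83% (103/124).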


Subjects
Diabetes Mellitus; Diabetic Retinopathy; Humans; Diabetic Retinopathy/diagnosis; Artificial Intelligence; Cross-Sectional Studies; Retrospective Studies; Mass Screening/methods; Photography/methods
15.
Qual Health Res ; 33(12): 1049-1058, 2023 10.
Article in English | MEDLINE | ID: mdl-37669693

ABSTRACT

In qualitative research, photographs and other visual data have been used with oral narratives in ethnography, interviews, and focus groups to convey and understand the perceptions, attitudes, and lived experiences of participants. Visual methodologies that incorporate photographic data include photo elicitation, which encompasses varied approaches using photographs generated by researchers or participants, and Photovoice, a form of photo elicitation focused on participatory action research. The current literature provides insufficient guidance on a systematic coding process for visual data elements that could maximize the capture of visual data for qualitative analysis. We describe our rationale and process for developing a two-step systematic process for coding visual data, specifically photographs. The two-step process involves coding the foreground (focal point) and then the background of the photograph, using separate codebooks. Application of this two-step coding approach surfaced additional rich data on the health-related contexts and environments in which participants lived. Incorporation of this methodology could enhance understanding of the context of health and generate ideas and new directions of inquiry.


Subjects
Anthropology, Cultural; Photography; Humans; Qualitative Research; Focus Groups; Photography/methods; Health Services Research
16.
J AAPOS ; 27(5): 308-309, 2023 10.
Article in English | MEDLINE | ID: mdl-37714425

ABSTRACT

We describe a novel method for clinical ophthalmic photography that uses the inherent macro-photography mode available in most recent smartphones, without additional attachments. This method facilitates acquisition of high-quality external and anterior segment clinical photography in children who may have difficulty remaining still enough for anterior segment photography at the slit lamp. We describe this technique and discuss its advantages and limitations.


Subjects
Anterior Eye Segment; Smartphone; Humans; Child; Anterior Eye Segment/diagnostic imaging; Slit Lamp Microscopy; Slit Lamp; Photography/methods
17.
Int J Legal Med ; 137(6): 1907-1920, 2023 Nov.
Article in English | MEDLINE | ID: mdl-37702754

ABSTRACT

As focus distance (FD) sets perspective, it is an important consideration for the forensic analysis of faces in photographs, including those used for craniofacial superimposition. In the craniofacial superimposition domain, the PerspectiveX algorithm has been suggested for FD estimation. This algorithm uses a mean value of palpebral fissure length, as a scale, to estimate the FD. So far, PerspectiveX has not been validated for profile view photographs or for photographs taken with smartphones. This study tests PerspectiveX in both frontal and profile views, using multiple DSLR cameras, lenses and smartphones. In total, 1709 frontal and 1709 profile photographs of 10 adult participants were tested at 15 ground truth FDs using three DSLR cameras with 12 camera/lens combinations, five smartphone back cameras and four smartphone front cameras. Across all distances, PerspectiveX performed with a mean absolute error (MAE) of 11% and 12% for DSLR photographs in frontal and profile views, respectively, while errors doubled for frontal and profile photographs from smartphones (26% and 27%, respectively). This reverifies FD estimation for frontal DSLR photographs, validates FD estimates from profile view DSLR photographs and shows that FD estimation is currently inaccurate for smartphones. Until FD estimation for facial photographs taken with smartphones improves, DSLR or 35 mm film images should continue to be sought for craniofacial superimpositions.


Subjects
Photography; Smartphone; Adult; Humans; Photography/methods; Algorithms; Eyelids; Forensic Medicine
19.
J Dermatol Sci ; 112(2): 92-98, 2023 Nov.
Article in English | MEDLINE | ID: mdl-37777361

ABSTRACT

BACKGROUND: The efficacy of therapeutic modalities for hair disease can be evaluated globally by photo assessment and more precisely by phototrichogram (PTG). However, the latter procedure is laborious, time consuming, subject to inter-observer variation, and requires hair clipping. OBJECTIVE: To establish an automated and patient/investigator friendly methodology enabling quantitative hair amount evaluation for daily clinical practice. METHODS: A novel automated numerical algorithm (aNA) adopting digital image binarization (i.e., black-and-white color conversion) was developed to evaluate hair coverage and measure PTG parameters in scalp images. Step-by-step improvement of the aNA was attempted through comparative analyses of the data obtained respectively by the novel approach and conventional PTG/global photography assessment (GPA). RESULTS: For measuring scalp hair coverage, the initial version of the aNA generally agreed with the cumulative hair diameter as assessed using PTG, showing a coefficient of 0.60. However, these outcomes were influenced by the angle of hair near the parting line. By integrating an angle compensation formula, the standard deviation of aNA data decreased from 5.7% to 1.2%. Consequently, the coefficient of determination for hair coverage calculated using the modified aNA and cumulative hair diameter assessed by PTG increased to 0.90. Furthermore, the change in hair coverage as determined by the modified aNA protocol correlated well with changes in the GPA score of images obtained in clinical trials. CONCLUSION: The novel aNA method provides a valuable tool enabling simple and accurate evaluation of hair growth and volume for clinical trials and for treatment of hair disease.
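The core of the binarization step described above reduces to thresholding a grayscale scalp image and taking the fraction of "hair" pixels as coverage. The sketch below is a minimal illustration: the threshold value and toy image are invented, and the published aNA additionally applies an angle-compensation formula not shown here.

```python
import numpy as np

def hair_coverage(gray, threshold=0.5):
    """Binarize a grayscale scalp image (dark pixels taken as hair)
    and return the fraction of the image covered by hair."""
    binary = np.asarray(gray) < threshold   # True where the pixel is dark
    return binary.mean()                    # fraction of hair pixels

# Toy 4x4 "image" in [0, 1]: 6 of 16 pixels fall below the threshold
img = np.array([[0.9, 0.2, 0.8, 0.1],
                [0.7, 0.3, 0.9, 0.9],
                [0.1, 0.8, 0.4, 0.9],
                [0.9, 0.2, 0.8, 0.9]])
coverage = hair_coverage(img)  # 6/16 = 0.375
```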


Subjects
Hair Diseases; Scalp; Humans; Alopecia; Inventions; Hair/diagnostic imaging; Photography/methods; Hair Diseases/diagnostic imaging
20.
J Craniofac Surg ; 34(8): 2501-2505, 2023.
Article in English | MEDLINE | ID: mdl-37603893

ABSTRACT

BACKGROUND: Double eyelid blepharoplasty has gained popularity over recent decades among Asians. Quantitative assessment of the morphologic changes after double eyelid blepharoplasty has remained poorly characterized. A photo-assisted digital method was introduced to measure the outcomes of double eyelid surgery in young Chinese patients. METHODS: A total of 168 Chinese patients who underwent esthetic upper blepharoplasty were recruited from October 2018 to October 2020. The participants were divided into mini-incision, full-incision, and full-incision double with epicanthoplasty (FIDE) groups. Changes in the eyeball exposure area (EEA), brow-eyelid margin distance (BED), and palpebral crease height after surgery were analyzed using ImageJ software. RESULTS: There was an overall increase in EEA in the 3 groups after upper blepharoplasty surgery. The FIDE group showed the greatest increase in EEA among these groups. Furthermore, BED was significantly decreased in each group after upper eyelid blepharoplasty; however, the mini-incision group showed the least BED reduction. The palpebral crease height at 90 days was significantly lower than that at 7 days after surgery. CONCLUSIONS: The photo-assisted anthropometric analysis offers a simple and objective measurement for double eyelid blepharoplasty. The eyes appear larger because of the increase in EEA and decrease in BED after double eyelid blepharoplasty. Distinct results were produced by different surgical techniques; the FIDE group showed the maximum increase in EEA and decrease in BED. These findings provide important references for preoperative planning and postoperative measurement.


Subjects
Blepharoplasty; Humans; Asian People; Blepharoplasty/methods; East Asian People; Eyelids/surgery; Eyelids/anatomy & histology; Retrospective Studies; Photography/methods